101.
Cross-over designs are used extensively for experiments in many fields. If the n subjects are relatively scarce compared with the t treatments, universally optimal designs do not exist under these restrictions, and a computational procedure is usually required to select the design. This arises, for example, when the subjects are animals that are in short supply, perhaps because of weight or age limitations. It is shown that cyclic cross-over designs are available that have lower average variances for direct and carry-over elementary treatment contrasts than other cyclic cross-over designs described in the literature. Examples of these improved designs are given for typical values of t and n. It is further shown that in these circumstances it is sensible to guard against choosing a design that can become disconnected if a few observations are lost during the experiment. These points are illustrated in detail by considering the selection of a cross-over design for an experiment involving seven treatments applied to four subjects.
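As a rough illustration of the cyclic construction only (not the paper's variance-based search for improved designs), the sketch below generates a cyclic cross-over design by shifting an arbitrary initial treatment sequence modulo t; the generating sequence is a placeholder, not one of the improved designs reported in the paper.

```python
# Illustrative sketch: a cyclic cross-over design is obtained by cycling an
# initial treatment sequence modulo t across subjects.
import numpy as np

def cyclic_crossover(initial_seq, t, n):
    """Subject j receives the initial sequence with every treatment shifted by j (mod t)."""
    return np.array([[(trt + j) % t for trt in initial_seq] for j in range(n)])

t, n = 7, 4                     # seven treatments, four subjects, as in the paper's example
initial_seq = [0, 1, 3, 6]      # arbitrary generating sequence over four periods (placeholder)
design = cyclic_crossover(initial_seq, t, n)
print(design)                   # rows = subjects, columns = periods, entries = treatments
```

Choosing among such designs would then require comparing the average variances of the direct and carry-over contrasts, which is the computational step the paper addresses.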
102.
Driving involves multiple cognitive processes that are influenced by a dynamic external environment and internal feedback loops. These processes are typically studied in a simulator environment to capture time-dependent driver performance measures. The primary objective of this research is to show that data analysis techniques that ignore or improperly treat time-dependent covariates will lead to erroneous estimates and conclusions. This is demonstrated with a driving simulator study that was used to test whether a significant decrease in performance occurs in the presence of auditory and visual distractions. A total of 28 drivers participated in a 2 (age) × 7 (strategy) repeated measures experiment. The response variable, accelerator release time, was analysed with and without consideration of time-dependent covariates. Using the inverse headway distance as a time-dependent covariate corrected logically inconsistent results obtained when the covariate was ignored. This indicates that ignoring covariates can actually lead to inappropriate design or policy implications.
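A minimal sketch of the contrast the study draws, assuming simulated data and placeholder variable names rather than the study's dataset: the same repeated-measures response is modelled with and without the trial-level inverse headway covariate, using a linear mixed model from statsmodels.

```python
# Hedged sketch: repeated-measures analysis with and without a time-dependent covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in: 28 drivers, 2 age groups, 7 strategies, 3 trials each.
rng = np.random.default_rng(0)
rows = []
for driver in range(28):
    age = "older" if driver < 14 else "younger"
    for strategy in range(7):
        for _ in range(3):
            inv_headway = rng.gamma(2.0, 0.02)                      # trial-level covariate
            y = 0.8 + 0.1 * strategy + 5.0 * inv_headway + rng.normal(0, 0.05)
            rows.append((driver, age, strategy, inv_headway, y))
df = pd.DataFrame(rows, columns=["driver", "age_group", "strategy",
                                 "inv_headway", "accel_release"])

# Ignoring the covariate: only the design factors enter the model.
m_naive = smf.mixedlm("accel_release ~ age_group * C(strategy)",
                      df, groups=df["driver"]).fit()

# Treating inverse headway as a time-dependent (trial-level) covariate.
m_cov = smf.mixedlm("accel_release ~ age_group * C(strategy) + inv_headway",
                    df, groups=df["driver"]).fit()

print(m_cov.summary())
```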
103.
When sensing its environment, an agent often receives information that only partially describes the current state of affairs. The agent then attempts to predict what it has not sensed, by using other pieces of information available through its sensors. Machine learning techniques can naturally aid this task, by providing the agent with the rules to be used for making these predictions. For this to happen, however, learning algorithms need to be developed that can deal with missing information in the learning examples in a principled manner, and without the need for external supervision. We investigate this problem herein.

We show how the Probably Approximately Correct semantics can be extended to deal with missing information during both the learning and the evaluation phase. Learning examples are drawn from some underlying probability distribution, but parts of them are hidden before being passed to the learner. The goal is to learn rules that can accurately recover information hidden in these learning examples. We show that for this to be done, one should first dispense with the requirement that rules always make definite predictions; "don't know" is sometimes necessitated. On the other hand, such abstentions should not be made freely, but only when sufficient information is not present for definite predictions to be made. Under this premise, we show that to accurately recover missing information, it suffices to learn rules that are highly consistent, i.e., rules that simply do not contradict the agent's sensory inputs. It is established that high consistency implies a somewhat discounted accuracy, and that this discount is, in a well-defined sense, unavoidable, and depends on how adversarially information is hidden in the learning examples.

Within our proposed learning model we prove that any PAC learnable class of monotone or read-once formulas is also learnable from incomplete learning examples. By contrast, we prove that parities and monotone-term 1-decision lists, which are properly PAC learnable, are not properly learnable under the new learning model. In the process of establishing our positive and negative results, we re-derive some basic PAC learnability machinery, such as Occam's Razor, and reductions between learning tasks. We finally consider a special case of learning from partial learning examples, where some prior bias exists on the manner in which information is hidden, and show how this provides a unified view of many previous learning models that deal with missing information.

We suggest that the proposed learning model goes beyond a simple extension of supervised learning to the case of incomplete learning examples. The principled and general treatment of missing information during learning, we argue, allows an agent to employ learning entirely autonomously, without relying on the presence of an external teacher, as is the case in supervised learning. We call our learning model autodidactic to emphasize the explicit dissociation of this model from any form of external supervision.
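As a toy illustration of consistency with abstention (my own sketch, not one of the paper's algorithms or proofs), the following learns a monotone conjunction from partially hidden Boolean examples, where None marks a hidden attribute, and answers "don't know" when the observed attributes cannot settle the prediction.

```python
# Toy sketch: learning a monotone conjunction from partially hidden examples,
# keeping only rules that never contradict observed sensor inputs, and
# abstaining when the observed attributes are insufficient.
def learn_monotone_conjunction(examples):
    """Keep every attribute that is never observed to be 0 in a positive example."""
    n = len(examples[0][0])
    relevant = set(range(n))
    for x, label in examples:
        if label == 1:
            relevant -= {i for i in relevant if x[i] == 0}   # drop observed violations only
    return relevant

def predict(relevant, x):
    """Return 1, 0, or None ('don't know') for a partially observed input x."""
    vals = [x[i] for i in relevant]
    if all(v == 1 for v in vals):
        return 1
    if any(v == 0 for v in vals):
        return 0
    return None   # some relevant attribute is hidden and none is observed to be 0

examples = [((1, 1, None, 0), 1), ((1, None, 1, 1), 1), ((0, 1, 1, 1), 0)]
h = learn_monotone_conjunction(examples)
print(h, predict(h, (1, None, 1, 0)))   # abstains: attribute 1 is hidden
```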
104.
This paper proposes a new method for exploratory analysis and the interpretation of latent structures. The approach is named missing-data methods for exploratory data analysis (MEDA). The MEDA approach can be applied in combination with several models, including Principal Components Analysis (PCA), Factor Analysis (FA) and Partial Least Squares (PLS). It can be seen as a substitute for rotation methods with better associated properties: it is more accurate than rotation methods in the detection of relationships between pairs of variables, it is robust to the overestimation of the number of PCs, and it does not depend on the normalization of the loadings. MEDA is useful for inferring the structure in the data and also for interpreting the contribution of each latent variable. The interpretation of PLS models with MEDA, including variable selection, may be especially valuable for the chemometrics community. The use of MEDA with PCA and PLS models is demonstrated with several simulated and real examples.
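A minimal sketch of the missing-data idea, assuming simulated data and a simplified estimator rather than the authors' exact MEDA statistic: each variable is left out in turn and reconstructed from the remaining variables through the PCA loadings, revealing how strongly it is tied to the latent structure.

```python
# Hedged sketch: leave-one-variable-out reconstruction under a PCA model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Simulated data with a two-component latent structure (placeholder, 6 variables).
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(100, 6))
X = X - X.mean(axis=0)

P = PCA(n_components=2).fit(X).components_.T          # loadings, shape (6, 2)
n_vars = X.shape[1]
recovery = np.zeros(n_vars)
for j in range(n_vars):
    obs = [k for k in range(n_vars) if k != j]        # treat variable j as missing
    P_obs = P[obs, :]
    # Least-squares scores from the observed variables only (projection to the model plane).
    T_hat = X[:, obs] @ P_obs @ np.linalg.inv(P_obs.T @ P_obs)
    x_hat = T_hat @ P[j, :]                           # reconstruction of the left-out variable
    recovery[j] = np.corrcoef(x_hat, X[:, j])[0, 1]
print(np.round(recovery, 2))   # high values: variable well explained by the rest
```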
105.
符欲梅  朱芳  昝昕武 《传感技术学报》2012,25(12):1706-1710
To address the small-sample, nonlinear, and time-series nature of the data collected by bridge health monitoring systems, a support vector machine (SVM) based method for imputing missing data is proposed. Building on an analysis of the autocorrelation of the data, the method applies support vector regression, selecting samples of appropriate dimension as the input vectors of the SVM and using them to predict the missing values. Compared with imputation by a BP neural network, the experimental results demonstrate the advantage and strong generalization ability of the SVM in imputing missing data from even smaller samples.
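A minimal sketch of the general approach, assuming a simulated monitoring series and placeholder hyperparameters rather than the paper's data or settings: lagged values of the autocorrelated series form the SVR input vector, and missing points are predicted from their predecessors.

```python
# Hedged sketch: SVR-based imputation of an autocorrelated monitoring series.
import numpy as np
from sklearn.svm import SVR

def impute_with_svr(series, lags=4):
    """Fill NaNs in a 1-D series by predicting each missing point from its previous `lags` values."""
    series = series.copy()
    X, y = [], []
    for i in range(lags, len(series)):
        window = series[i - lags:i]
        if not np.isnan(window).any() and not np.isnan(series[i]):
            X.append(window)
            y.append(series[i])
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(np.array(X), np.array(y))
    for i in range(lags, len(series)):
        if np.isnan(series[i]) and not np.isnan(series[i - lags:i]).any():
            series[i] = model.predict(series[i - lags:i].reshape(1, -1))[0]
    return series

# Placeholder monitoring signal with a few missing observations.
data = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.default_rng(0).normal(size=200)
data[[50, 120, 121]] = np.nan
print(impute_with_svr(data)[[50, 120, 121]])
```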
106.
Joint models for longitudinal and time-to-event data have recently attracted a lot of attention in statistics and biostatistics. Even though these models enjoy a wide range of applications in many different statistical fields, they have not yet found their rightful place in the toolbox of modern applied statisticians, mainly because they are computationally intensive to fit. The main difficulty arises from the requirement for numerical integration with respect to the random effects. This integration is typically performed using Gaussian quadrature rules whose computational complexity increases exponentially with the dimension of the random-effects vector. A solution to this problem is proposed using a pseudo-adaptive Gauss-Hermite quadrature rule. The idea behind this rule is to use information about the shape of the integrand obtained by separately fitting a mixed model for the longitudinal outcome. Simulation studies show that the pseudo-adaptive rule performs excellently in practice and is considerably faster than the standard Gauss-Hermite rule.
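A one-dimensional toy analogue of the pseudo-adaptive idea (not the paper's joint-model implementation): Gauss-Hermite nodes are recentred and rescaled at an assumed subject-specific estimate, which in the paper comes from separately fitting the longitudinal mixed model, so that a few nodes capture an integrand concentrated far from the random-effects prior.

```python
# Hedged sketch: standard vs. recentred ("adaptive") Gauss-Hermite quadrature.
import numpy as np
from scipy.stats import norm

nodes, weights = np.polynomial.hermite.hermgauss(5)   # 5-point Gauss-Hermite rule

def gh_expectation(f, mu, sigma):
    """Approximate E[f(b)] for b ~ N(mu, sigma^2) via the substitution b = mu + sqrt(2)*sigma*x."""
    b = mu + np.sqrt(2.0) * sigma * nodes
    return np.sum(weights * f(b)) / np.sqrt(np.pi)

# Integrand sharply concentrated around b = 1.5 (stand-in for a subject's likelihood contribution).
f = lambda b: np.exp(-50.0 * (b - 1.5) ** 2)

# Standard rule: nodes placed according to the random-effects prior N(0, 1).
standard = gh_expectation(f, mu=0.0, sigma=1.0)

# Pseudo-adaptive flavour: same nodes, recentred at assumed estimates (mu_hat, sigma_hat)
# and reweighted by the density ratio so the same integral is targeted.
mu_hat, sigma_hat = 1.5, 0.1
h = lambda b: f(b) * norm.pdf(b, 0.0, 1.0) / norm.pdf(b, mu_hat, sigma_hat)
adaptive = gh_expectation(h, mu=mu_hat, sigma=sigma_hat)

print(standard, adaptive)   # the recentred rule is far closer to the true value
```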
107.
The wind direction sensor is a key component of an automatic weather station, and noise and missing values in its operational data increase the measurement error. A calibration control method for automatic weather station wind direction sensors based on nonlinear fitting is therefore designed. The structure and working principle of the sensor are analysed and an equivalent model of the automatic weather station is constructed. Real-time operational data from the sensor are acquired by cyclic sampling and preprocessed through filtering and compensation of missing values. On the basis of the preprocessed data, the sensor error is computed with nonlinear fitting, and calibration control is completed by installing a wind direction sensor calibration controller. Experimental results show that, compared with the traditional calibration control method, the designed method reduces the sensor measurement error by 0.759° and lowers the error variation rate, demonstrating its effectiveness for calibration control of automatic weather station wind direction sensors.
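A minimal sketch of the nonlinear-fitting step, assuming a simple two-harmonic error model and simulated calibration data rather than the paper's model or measurements: the periodic error curve is fitted with scipy's curve_fit and subtracted from the sensor readings as a correction.

```python
# Hedged sketch: fitting a periodic error model to reference-vs-measured wind directions.
import numpy as np
from scipy.optimize import curve_fit

def error_model(theta_deg, a1, phi1, a2, phi2, bias):
    """Assumed error model in degrees: two harmonics of the direction plus a constant bias."""
    t = np.deg2rad(theta_deg)
    return a1 * np.sin(t + phi1) + a2 * np.sin(2 * t + phi2) + bias

# Simulated calibration run: reference direction vs. sensor reading (placeholder data).
rng = np.random.default_rng(1)
ref = np.linspace(0, 360, 73)
true_err = error_model(ref, 1.2, 0.4, 0.5, 1.1, 0.3)
measured = ref + true_err + 0.1 * rng.normal(size=ref.size)

params, _ = curve_fit(error_model, ref, measured - ref)    # fit the error curve
corrected = measured - error_model(measured, *params)      # apply the correction

print(np.abs(measured - ref).mean(), np.abs(corrected - ref).mean())
```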
108.
With the rise of data analysis research, data preprocessing has received increasing attention from researchers, and the importance of missing data imputation has gradually become apparent. Building on the ROUSTIDA data completion algorithm, and targeting the characteristics of duplicated data with key attributes, this paper proposes an improved ROUSTIDA algorithm, Key&Rpt_RS. Key&Rpt_RS inherits the advantages of ROUSTIDA while taking the duplicated nature of the target data into account and analysing the influence of key attributes on imputation quality, yielding more accurate and effective imputation results.
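A minimal sketch of the key-attribute idea only, not the ROUSTIDA rough-set machinery or the Key&Rpt_RS algorithm itself: records sharing the same key attributes are treated as duplicates, and a record's missing values are filled from values on which its duplicates agree. Column names and data are placeholders.

```python
# Hedged sketch: filling missing values from duplicate records identified by key attributes.
import pandas as pd

def fill_from_key_duplicates(df, key_cols):
    """Fill NaNs within each group of records sharing identical key attributes."""
    def fill_group(group):
        group = group.copy()
        for col in group.columns:
            if col in key_cols:
                continue
            observed = group[col].dropna().unique()
            if len(observed) == 1:                     # duplicates agree on this value
                group[col] = group[col].fillna(observed[0])
        return group
    return df.groupby(key_cols, dropna=False, group_keys=False).apply(fill_group)

df = pd.DataFrame({
    "id_card": ["A1", "A1", "B2"],                     # key attribute (placeholder)
    "city":    ["NYC", None, None],
    "age":     [None, 34, 29],
})
print(fill_from_key_duplicates(df, ["id_card"]))
```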
109.
Effect of lameness on culling in dairy cows
The purpose of this study was to assess the effect of lameness on dairy cow survival. Cox's proportional hazards regression models were fitted to single-lactation data from 2520 cows in 2 New York State dairy herds. Models were controlled for the time-independent effects of parity, projected milk yield, and calving season, and for the time-dependent effects of lameness and culling. Other common diseases were found to be nonconfounding and so were not included in any of the final models. Survival was measured as the time from calving until death or sale. Cows were censored if they reached the start of the next lactation or the end of the study, whichever occurred first. All models were stratified by herd. For all lameness diagnoses combined, survival in the herd decreased for cows that became lame during the first half of lactation, with a hazard ratio of up to 2 times that of a nonlame cow. Foot rot diagnosed during the second or third month of lactation decreased survival during the same time period (hazard ratio = 5.1; 95% confidence interval = 1.6 to 16.2). Sole ulcers diagnosed in the first 4 mo of lactation decreased survival in several subsequent periods, with the strongest association between diagnosis in the third and fourth months of lactation and exit from the herd during that same period (hazard ratio = 2.7; 95% confidence interval = 1.3 to 6.0). Foot warts were not associated with decreased survival in this analysis. Lameness was never associated with increased survival in any of the models.
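A minimal sketch of a Cox model with a time-dependent lameness covariate, in the spirit of the study's analysis, assuming toy long-format data and placeholder column names rather than the study's dataset; it uses lifelines' CoxTimeVaryingFitter.

```python
# Hedged sketch: Cox regression with a time-dependent covariate (lameness status).
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long-format toy data: one row per cow per interval (days in milk), with the
# lameness indicator switched on from the month of diagnosis onwards.
df = pd.DataFrame({
    "cow":   [1, 1, 2, 2, 3, 4, 5, 6, 7, 8],
    "start": [0, 90, 0, 60, 0, 0, 0, 0, 0, 0],
    "stop":  [90, 200, 60, 305, 305, 150, 305, 240, 305, 305],
    "lame":  [0, 1, 0, 1, 0, 0, 0, 1, 0, 0],
    "event": [0, 1, 0, 0, 0, 1, 0, 1, 0, 0],   # 1 = culled (death or sale) at `stop`
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="cow", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # hazard ratio for the time-dependent lameness covariate
```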
110.
何云  皮德常 《计算机科学》2015,42(11):251-255, 283
Gene expression data frequently contain missing values, which hinders the study of gene expression. A new similarity measure, termed the reduced relational grade (精简关联度), is proposed, and on this basis an iterative missing-data imputation algorithm based on it, RKNNimpute, is developed. The reduced relational grade is a refinement of the grey relational grade that achieves the same effect while markedly lowering the time complexity of the algorithm. RKNNimpute uses the reduced relational grade as its similarity measure, adds imputed genes to the candidate set of nearest neighbours, and fills in the remaining missing values iteratively, improving both imputation quality and performance. Extensive experiments on time-series, non-time-series, and mixed gene expression datasets were carried out to evaluate RKNNimpute. The results show that the reduced relational grade is an efficient distance measure and that the proposed RKNNimpute algorithm outperforms conventional imputation algorithms.
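A minimal sketch of the iterative neighbour-based scheme only, with plain correlation standing in for the paper's reduced relational grade, so this is not RKNNimpute itself: each gene's missing values are imputed from its most similar genes, and imputed genes rejoin the candidate neighbour set on later iterations because the similarity matrix is recomputed over the filled data.

```python
# Hedged sketch: iterative KNN-style imputation for a genes-by-samples matrix.
import numpy as np

def iterative_knn_impute(X, k=5, n_iter=3):
    X = X.copy()
    missing = np.isnan(X)
    # Initial fill with per-gene means so every gene can serve as a neighbour.
    gene_means = np.nanmean(X, axis=1, keepdims=True)
    X[missing] = np.broadcast_to(gene_means, X.shape)[missing]
    for _ in range(n_iter):
        sim = np.corrcoef(X)                           # gene-to-gene similarity (stand-in measure)
        for g in np.where(missing.any(axis=1))[0]:
            neighbours = np.argsort(-sim[g])
            neighbours = neighbours[neighbours != g][:k]
            w = np.clip(sim[g, neighbours], 1e-6, None)
            est = (w[:, None] * X[neighbours]).sum(axis=0) / w.sum()
            X[g, missing[g]] = est[missing[g]]         # refresh only the originally missing entries
    return X

rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 12))                       # placeholder expression matrix
expr[rng.random(expr.shape) < 0.05] = np.nan           # 5% missing values
print(np.isnan(iterative_knn_impute(expr)).sum())      # 0 after imputation
```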